
    The Harmonic Analysis of Kernel Functions

    Kernel-based methods have recently been introduced for linear system identification as an alternative to parametric prediction error methods. Adopting the Bayesian perspective, the impulse response is modeled as a non-stationary Gaussian process with zero mean and a certain kernel (i.e., covariance) function. Choosing the kernel is one of the most challenging and important issues. In the present paper we introduce the harmonic analysis of this non-stationary process and argue that it is an important tool for designing such kernels. Furthermore, this analysis also suggests an effective way to approximate the kernel, which reduces the computational burden of the identification procedure.
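
    As a rough, hedged sketch of the estimation pipeline this abstract refers to (assuming a first-order stable-spline/TC kernel and synthetic data; all names and values below are illustrative, and the harmonic-analysis construction of the paper is not reproduced), the posterior-mean impulse response can be computed as follows, including an optional low-rank approximation of the kernel of the kind that cuts the computational burden:

```python
import numpy as np

def tc_kernel(n, alpha=0.9):
    """First-order stable spline (TC) kernel: K[i, j] = alpha ** max(i, j)."""
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

def impulse_response_posterior_mean(u, y, n=50, alpha=0.9, sigma2=0.1, rank=None):
    """Posterior mean of g in y = Phi g + e, with g ~ N(0, K) and e ~ N(0, sigma2 I)."""
    N = len(y)
    # Regression matrix built from shifted copies of the input
    Phi = np.column_stack([np.concatenate([np.zeros(k), u[:N - k]]) for k in range(n)])
    K = tc_kernel(n, alpha)
    if rank is not None:
        # Optional low-rank approximation of K via truncated eigendecomposition
        w, V = np.linalg.eigh(K)
        top = np.argsort(w)[::-1][:rank]
        K = (V[:, top] * w[top]) @ V[:, top].T
    S = Phi @ K @ Phi.T + sigma2 * np.eye(N)
    return K @ Phi.T @ np.linalg.solve(S, y)

# Toy usage on synthetic data
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
g_true = 0.8 ** np.arange(1, 51)
y = np.convolve(u, g_true)[:200] + 0.1 * rng.standard_normal(200)
g_hat = impulse_response_posterior_mean(u, y, rank=10)
```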

    A Bayesian Approach to Sparse plus Low rank Network Identification

    We consider the problem of modeling multivariate time series with parsimonious dynamical models that can be represented as sparse dynamic Bayesian networks with few latent nodes. This structure translates into a sparse plus low-rank model. In this paper, we propose a Gaussian regression approach to identify such models.
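
    For intuition only (hypothetical dimensions and values, not the paper's estimator), the model class can be illustrated by a vector autoregression whose transition matrix splits into a sparse term, encoding the few direct links of the network, plus a low-rank term induced by a small number of latent nodes:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_latent = 10, 2

# Sparse part: few direct links among the observed variables
S = np.where(rng.random((n_obs, n_obs)) < 0.1,
             0.3 * rng.standard_normal((n_obs, n_obs)), 0.0)
# Low-rank part: interactions mediated by n_latent hidden nodes
B = rng.standard_normal((n_obs, n_latent))
C = rng.standard_normal((n_latent, n_obs))
L = 0.1 * B @ C                                    # rank <= n_latent

A = S + L                                          # sparse plus low-rank dynamics
A *= 0.9 / max(abs(np.linalg.eigvals(A)))          # rescale for stability

# Simulate y(t+1) = A y(t) + noise
T = 500
y = np.zeros((T, n_obs))
for t in range(T - 1):
    y[t + 1] = A @ y[t] + 0.1 * rng.standard_normal(n_obs)
```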

    Visual Representations: Defining Properties and Deep Approximations

    Visual representations are defined in terms of minimal sufficient statistics of visual data, for a class of tasks, that are also invariant to nuisance variability. Minimal sufficiency guarantees that we can store a representation in lieu of raw data with the smallest complexity and no performance loss on the task at hand. Invariance guarantees that the statistic is constant with respect to uninformative transformations of the data. We derive analytical expressions for such representations and show they are related to feature descriptors commonly used in computer vision, as well as to convolutional neural networks. This link highlights the assumptions and approximations tacitly made by these methods and explains empirical practices such as clamping, pooling and joint normalization.

    Maximum Entropy Vector Kernels for MIMO system identification

    Recent contributions have framed linear system identification as a nonparametric regularized inverse problem. Relying on ℓ2-type regularization, which accounts for the stability and smoothness of the impulse response to be estimated, these approaches have been shown to be competitive with classical parametric methods. In this paper, adopting Maximum Entropy arguments, we derive a new ℓ2 penalty stemming from a vector-valued kernel; to do so we exploit the structure of the Hankel matrix, thus simultaneously controlling the complexity (measured by the McMillan degree), stability and smoothness of the identified models. As a special case we recover the nuclear norm penalty on the squared block Hankel matrix. In contrast with previous literature on reweighted nuclear norm penalties, our kernel is described by a small number of hyper-parameters, which are iteratively updated through marginal likelihood maximization; constraining the structure of the kernel acts as a (hyper)regularizer which helps control the effective degrees of freedom of our estimator. To optimize the marginal likelihood we adapt a Scaled Gradient Projection (SGP) algorithm, which proves to be significantly cheaper computationally than other first- and second-order off-the-shelf optimization methods. The paper also contains an extensive comparison with many state-of-the-art methods on several Monte Carlo studies, which confirms the effectiveness of our procedure.
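
    As a hedged sketch of the hyperparameter tuning step (illustrative names; a plain grid search stands in for the Scaled Gradient Projection algorithm of the paper), the negative log marginal likelihood for a TC-type kernel can be written and minimized as follows:

```python
import numpy as np

def neg_log_marginal_likelihood(y, Phi, K, sigma2):
    """-log p(y) for y = Phi g + e, with g ~ N(0, K) and e ~ N(0, sigma2 I)."""
    Sigma = Phi @ K @ Phi.T + sigma2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(Sigma)
    return 0.5 * (logdet + y @ np.linalg.solve(Sigma, y))

def tune_alpha(y, Phi, alphas, sigma2=0.1):
    """Select the decay hyperparameter of a TC-type kernel by marginal likelihood."""
    n = Phi.shape[1]
    idx = np.arange(1, n + 1)
    return min(alphas, key=lambda a: neg_log_marginal_likelihood(
        y, Phi, a ** np.maximum.outer(idx, idx), sigma2))
```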

    Identification of stable models via nonparametric prediction error methods

    A new Bayesian approach to linear system identification has been proposed in a series of recent papers. The main idea is to frame linear system identification as predictor estimation in an infinite-dimensional space, with the aid of regularization/Bayesian techniques. This approach guarantees the identification of stable predictors based on prediction error minimization. Unfortunately, the stability of the predictors does not guarantee the stability of the impulse response of the system. In this paper we propose and compare various techniques to address this issue. Simulation results comparing these techniques are provided.

    Bayesian and regularization approaches to multivariable linear system identification: the role of rank penalties

    Recent developments in linear system identification have proposed the use of non-parametric methods, relying on regularization strategies, to handle the so-called bias/variance trade-off. This paper introduces an impulse response estimator which relies on an ℓ2-type regularization including a rank penalty derived using the log-det heuristic as a smooth approximation to the rank function. This makes it possible to account for different properties of the estimated impulse response (e.g. smoothness and stability) while also penalizing high-complexity models, and to account for and enforce coupling between different input-output channels in MIMO systems. According to the Bayesian paradigm, the parameters defining the relative weight of the two regularization terms, as well as the structure of the rank penalty, are estimated by optimizing the marginal likelihood. Once these hyperparameters have been estimated, the impulse response estimate is available in closed form. Experiments show that the proposed method is superior to the estimator relying on the "classic" ℓ2-regularization alone, as well as to those based on atomic and nuclear norms.
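
    To make the rank penalty concrete, the sketch below (illustrative values; delta is a hypothetical smoothing parameter) shows the log-det heuristic, log det(X X^T + delta I), used as a smooth surrogate for the rank:

```python
import numpy as np

def logdet_rank_surrogate(X, delta=1e-3):
    """Smooth surrogate for rank(X): log det(X X^T + delta * I)."""
    n = X.shape[0]
    _, logdet = np.linalg.slogdet(X @ X.T + delta * np.eye(n))
    return logdet

rng = np.random.default_rng(0)
X_lowrank = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))   # rank 2
X_fullrank = rng.standard_normal((8, 8))                                # rank 8
print(logdet_rank_surrogate(X_lowrank), logdet_rank_surrogate(X_fullrank))
```

    The surrogate is markedly smaller for the low-rank matrix, which is what the penalty exploits when it favours low-complexity models.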

    Estimating Koopman operators for nonlinear dynamical systems: a nonparametric approach

    The Koopman operator is a mathematical tool that allows for a linear description of non-linear systems, at the price of working in infinite-dimensional spaces. Dynamic Mode Decomposition and Extended Dynamic Mode Decomposition are amongst the most popular finite-dimensional approximations. In this paper we capture their core essence as dual versions of the same framework and incorporate them into the kernel framework. To do so, we leverage the reproducing kernel Hilbert space (RKHS) as a suitable space for learning the Koopman dynamics, thanks to its intrinsic finite-dimensional nature, shaped by data. We finally establish a strong link between kernel methods and Koopman operators, leading to the estimation of the latter through kernel functions. We also provide simulations for comparison with standard procedures.
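
    A minimal sketch of standard Dynamic Mode Decomposition, one of the finite-dimensional approximations mentioned above (the kernel-based estimator of the paper is not reproduced; the toy system below is hypothetical):

```python
import numpy as np

def dmd(X, Y, r=None):
    """Exact DMD: estimate eigenvalues/modes of A with Y ≈ A X,
    where the columns of X and Y are consecutive state snapshots."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    if r is not None:                              # optional rank truncation
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
    A_tilde = U.T @ Y @ Vh.T @ np.diag(1.0 / s)    # reduced-order operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.T @ np.diag(1.0 / s) @ W        # DMD modes
    return eigvals, modes

# Toy usage: recover the eigenvalues of a known linear system
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
states = [rng.standard_normal(2)]
for _ in range(20):
    states.append(A_true @ states[-1])
snaps = np.array(states).T                         # shape (2, 21)
eigvals, _ = dmd(snaps[:, :-1], snaps[:, 1:])
print(np.sort(eigvals.real))                       # approximately [0.8, 0.9]
```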

    Estimating effective connectivity in linear brain network models

    Contemporary neuroscience has embraced network science to study the complex and self-organized structure of the human brain; one of the main outstanding issues is that of inferring, from measured data, chiefly functional Magnetic Resonance Imaging (fMRI), the so-called effective connectivity in brain networks, that is, the interactions among neuronal populations. This inverse problem is complicated by the fact that the BOLD (Blood Oxygenation Level Dependent) signal measured by fMRI represents a dynamic and nonlinear transformation (the hemodynamic response) of neuronal activity. In this paper, we consider resting-state (rs) fMRI data; building upon a linear population model of the BOLD signal and a stochastic linear DCM model, the model parameters are estimated through an EM-type iterative procedure, which alternately estimates the neuronal activity by means of the Rauch-Tung-Striebel (RTS) smoother, updates the connections among neuronal states and refines the parameters of the hemodynamic model; sparsity in the interconnection structure is favoured using an iterative reweighting scheme. Experimental results using rs-fMRI data demonstrate the effectiveness of our approach, and a comparison with state-of-the-art routines (the SPM12 toolbox) is provided.
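
    As a hedged sketch of one ingredient of the E-step described above, the Rauch-Tung-Striebel smoother for a linear Gaussian state-space model x(t+1) = A x(t) + w, y(t) = C x(t) + v can be written as follows (dimensions and noise levels are hypothetical; the connectivity and hemodynamic updates of the paper are omitted):

```python
import numpy as np

def rts_smoother(y, A, C, Q, R, x0, P0):
    """Kalman filter forward pass followed by the RTS backward pass."""
    T, n = len(y), len(x0)
    xp = np.zeros((T, n)); Pp = np.zeros((T, n, n))   # predicted
    xf = np.zeros((T, n)); Pf = np.zeros((T, n, n))   # filtered
    x_prev, P_prev = x0, P0
    for t in range(T):
        # Prediction step
        xp[t] = A @ x_prev
        Pp[t] = A @ P_prev @ A.T + Q
        # Measurement update
        S = C @ Pp[t] @ C.T + R
        K = Pp[t] @ C.T @ np.linalg.inv(S)
        xf[t] = xp[t] + K @ (y[t] - C @ xp[t])
        Pf[t] = Pp[t] - K @ C @ Pp[t]
        x_prev, P_prev = xf[t], Pf[t]
    # Backward (smoothing) pass
    xs, Ps = xf.copy(), Pf.copy()
    for t in range(T - 2, -1, -1):
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    return xs, Ps
```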

    Online semi-parametric learning for inverse dynamics modeling

    This paper presents a semi-parametric algorithm for online learning of a robot inverse dynamics model. It combines the strengths of parametric and non-parametric modeling: the former exploits the rigid body dynamics equation, while the latter exploits a suitable kernel function. We provide an extensive comparison with other methods from the literature using real data from the iCub humanoid robot. In doing so we also compare two different techniques, namely cross validation and marginal likelihood optimization, for estimating the hyperparameters of the kernel function.
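
    A minimal sketch of the semi-parametric idea (illustrative names; the online update and the two hyperparameter selection schemes compared in the paper are omitted): a linear kernel on known basis functions, standing in for the rigid-body-dynamics regressors, is summed with a nonparametric RBF kernel and used in kernel ridge regression:

```python
import numpy as np

def semi_parametric_kernel(X1, X2, Phi1, Phi2, w_lin=1.0, lengthscale=1.0):
    """k(x, x') = w_lin * phi(x)^T phi(x') + exp(-||x - x'||^2 / (2 l^2))."""
    linear_part = w_lin * Phi1 @ Phi2.T
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return linear_part + np.exp(-0.5 * sq_dists / lengthscale ** 2)

def fit_predict(X_tr, Phi_tr, y_tr, X_te, Phi_te, noise=0.1):
    """Kernel ridge regression with the combined (semi-parametric) kernel."""
    K = semi_parametric_kernel(X_tr, X_tr, Phi_tr, Phi_tr)
    alpha = np.linalg.solve(K + noise * np.eye(len(y_tr)), y_tr)
    return semi_parametric_kernel(X_te, X_tr, Phi_te, Phi_tr) @ alpha
```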